31 research outputs found

    Frame of Reference Interaction

    We present a unified set of 3D interaction techniques that demonstrates an alternative way of thinking about the navigation of large virtual spaces in non-immersive environments. Our alternative conceptual framework views navigation from a cognitive perspective, as a way of facilitating changes in user attention from one reference frame to another, rather than from the mechanical perspective of moving a camera between different points of interest. All of our techniques link multiple frames of reference in some meaningful way. Some techniques link multiple windows within a zooming environment, while others allow seamless changes of user attention between static objects, moving objects, and groups of moving objects. We present our techniques as they are implemented in GeoZui3D, a geographic visualization system for ocean data.

    Integrating Multiple 3D Views through Frame-of-reference Interaction

    Frame-of-reference interaction consists of a unified set of 3D interaction techniques for exploratory navigation of large virtual spaces in non-immersive environments. It is based on a conceptual framework that considers navigation from a cognitive perspective, as a way of facilitating changes in user attention from one reference frame to another, rather than from the mechanical perspective of moving a camera between different points of interest. All of our techniques link multiple frames of reference in some meaningful way. Some techniques link multiple windows within a zooming environment, while others allow seamless changes of user focus between static objects, moving objects, and groups of moving objects. We present our techniques as they are implemented in GeoZui3D, a geographic visualization system for ocean data.
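    The abstract does not give implementation details, but the core idea of attaching a view to a reference frame can be illustrated with a minimal sketch (all class and field names here are hypothetical, not from the paper): the camera's pose is stored relative to a frame, so when the frame moves, the view follows it automatically.

```python
from dataclasses import dataclass

@dataclass
class ReferenceFrame:
    """A frame of reference in the scene, e.g. a moving vessel."""
    x: float
    y: float
    z: float

@dataclass
class AttachedView:
    """A camera whose pose is stored relative to a reference frame,
    so it follows the frame as it moves through the scene."""
    frame: ReferenceFrame
    dx: float  # offset from the frame origin, in frame coordinates
    dy: float
    dz: float

    def world_eye(self):
        """World-space eye position: frame origin plus stored offset."""
        f = self.frame
        return (f.x + self.dx, f.y + self.dy, f.z + self.dz)

# A view attached to a moving vessel keeps its relative offset:
vessel = ReferenceFrame(100.0, 50.0, 0.0)
view = AttachedView(vessel, dx=0.0, dy=-20.0, dz=10.0)
assert view.world_eye() == (100.0, 30.0, 10.0)
vessel.x += 5.0  # the vessel moves...
assert view.world_eye() == (105.0, 30.0, 10.0)  # ...and the view follows
```

    Changing which frame a view is attached to is then a single reassignment, which is one way to read the paper's "changes in user attention from one reference frame to another".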

    Linking focus and context in three-dimensional multiscale environments

    The central question behind this dissertation is this: in what ways can 3D multiscale spatial information be presented in an interactive computer graphics environment, such that a human observer can better comprehend it? Toward answering this question, a two-pronged approach is employed, consisting of practice in computer user-interface design and theory grounded in perceptual psychology, bound together by framing the question in terms of focus and context as they apply to human attention.

    The major practical contribution of this dissertation is a novel set of techniques for linking 3D windows to various kinds of reference frames in a virtual scene and to each other, linking one or more focal views with a view that provides context. Central to these techniques is the explicit recognition of the frames of reference inherent in objects, in computer-graphics viewpoint specifications, and in the human perception and cognitive understanding of space. Many of these techniques are incorporated into the GeoZui3D system as major extensions. An empirical evaluation of these techniques confirms the utility of 3D window proxy representations and orientation coupling.

    The major theoretical contribution is a cognitive systems model that predicts when linked focus and context views should be used over other techniques such as zooming. The predictive power of the model comes from explicit recognition of the locations where a user will focus attention, together with applied interpretations of the limitations of visual working memory. The model's ability to predict performance is empirically validated, and its ability to model user error is empirically founded. Both the model and the results of the related experiments suggest that multiple linked windows can be an effective way of presenting multiscale spatial information, especially in situations involving the comparison of three or more objects. The contributions of the dissertation are discussed in the context of the applications that motivated them.

    Panoramic Images for Situational Awareness in a 3D Chart-of-the-Future Display

    Many early charts featured sketches of the coastline, providing a good picture of what the shore looked like from the bridge of a ship. These helped the mariner distinguish one port from another during an approach and establish a rough position within that approach. More recent experimental 3D chart interfaces have incorporated 3D models of land topography and man-made structures to perform the same function. However, topography is typically captured from the air, by means of stereophotogrammetry or lidar, and fails to give a good representation of what is seen from a vessel’s bridge. We have been investigating ways to present photographic imagery to the mariner that better capture the utility of the early coastline sketches. Our focus has been on navigation in restricted waters, using the Piscataqua River as a test area. This is part of the “Chart-of-the-Future” project conducted by the Data Visualization Research Lab at the UNH Center for Coastal and Ocean Mapping. Through this investigation, we have developed a new method for presenting photographic imagery to the mariner: a series of panoramic images progressing down the channel. Each panorama consists of images stitched almost seamlessly together into a circular arc whose center is intended to be close to the position of a vessel’s bridge during transit. Viewed from this center there is no distortion; distortion increases to a maximum midway between two panorama centers. Our preliminary trials suggest that panoramas can provide an excellent supplement to electronic navigation aids by making them visible in the context of what can be seen out the window. We believe panoramas will be especially useful both in familiarizing a mariner with an unfamiliar approach during planning and in enhancing situational awareness at times of reduced visibility, such as fog, dusk, or nightfall.
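    Since distortion is zero at a panorama's capture center and grows with distance from it, a display along these lines would presumably show the panorama whose center is nearest the vessel. A minimal sketch of that selection step, with hypothetical coordinates:

```python
import math

def nearest_panorama(vessel_pos, centers):
    """Pick the index of the panorama whose capture center is closest
    to the vessel.  A stitched panorama is distortion-free only when
    viewed from its own center, so the nearest one is the best to show."""
    def dist(c):
        return math.hypot(c[0] - vessel_pos[0], c[1] - vessel_pos[1])
    return min(range(len(centers)), key=lambda i: dist(centers[i]))

# Panorama centers spaced along a channel (illustrative coordinates, metres):
centers = [(0.0, 0.0), (500.0, 0.0), (1000.0, 100.0)]
assert nearest_panorama((450.0, 10.0), centers) == 1
```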

    Linking Images and Sound in a 3D Museum Exhibit Demonstration


    Fusing Information in a 3D Chart-of-the-Future Display

    The Data Visualization Research Lab at the Center for Coastal and Ocean Mapping is investigating how three-dimensional navigational displays can most effectively be constructed. This effort is progressing along multiple paths and is implemented in the GeoNav3D system, a 3D chart-of-the-future research prototype. We present two lines of investigation here. First, we explore how tide, depth, and planning information can be combined (fused) into a single view, in order to give the user a more realistic picture of effective water depths. In the GeoNav3D system, 3D shaded bathymetry, color-coded for depth, is used to display navigable areas. As in ENC displays, different colors make it easy to identify areas that are safe, areas where under-keel clearance is minimal, and areas where depths are too shallow. Real-time or model-generated tide information is taken into account in dynamically color-coding the depths. One advantage of using a continuous bathymetric model, versus discrete depth areas, is that the model can be continuously adjusted for water level. This concept is also extended for planning purposes by displaying the color-coded depths along a proposed corridor at the expected time of reaching each point. In our second line of investigation, we explore mechanisms for linking information from multiple 3D views into a coherent whole. In GeoNav3D, it is possible to create a variety of plan and perspective views, and these views can be attached to moving reference frames. This provides not only semi-static views, such as from-the-bridge and under-keel along-track profile views, but also more dynamic, interactive views. These views are linked through visual devices that allow the fusion of information among the views. We present several such devices and show how they highlight relevant details and help to minimize user confusion. Investigation into the utility of various linked views for aiding real-situation decision-making is ongoing.
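    The tide-adjusted safe/caution/unsafe coloring described above can be sketched as a small classification step. This is an illustrative reading of the idea, not GeoNav3D code; the thresholds and parameter names are invented for the example.

```python
def depth_category(charted_depth, tide, draft, safety_margin=2.0):
    """Classify effective water depth for display, in the spirit of
    the ENC-style safe/caution/unsafe colour scheme.  The tide offset
    is applied to the charted depth before comparison, which is what
    a continuous bathymetric model makes cheap to do on the fly.
    All units are metres; thresholds are illustrative only."""
    effective = charted_depth + tide   # water depth at the current tide
    clearance = effective - draft      # under-keel clearance
    if clearance <= 0.0:
        return "unsafe"                # too shallow to pass
    if clearance < safety_margin:
        return "caution"               # under-keel clearance is minimal
    return "safe"

assert depth_category(10.0, 1.0, draft=5.0) == "safe"
assert depth_category(6.0, 0.5, draft=5.0) == "caution"
assert depth_category(4.0, 0.5, draft=5.0) == "unsafe"
```

    Because the classification is a pure function of charted depth and water level, a rising or falling tide simply re-runs it over the same bathymetric model, which is the stated advantage over fixed discrete depth areas.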

    Haptic-GeoZui3D: Exploring the Use of Haptics in AUV Path Planning


    GeoZui3D: Data Fusion for Interpreting Oceanographic Data

    GeoZui3D stands for Geographic Zooming User Interface. It is a new visualization software system designed for interpreting multiple sources of 3D data. The system supports gridded terrain models, triangular meshes, curtain plots, and a number of other display objects. A novel center-of-workspace interaction method unifies a number of aspects of the interface: it provides a simple viewpoint control method, helps link multiple views, and is ideal for stereoscopic viewing. GeoZui3D has a number of features to support real-time input. Through a CORBA interface, external entities can influence the position and state of objects in the display. Extra windows can be attached to moving objects, allowing their position and data to be monitored. We describe the application of this system to heterogeneous data fusion, multibeam QC, and ROV/AUV monitoring.
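    One common way to realize a center-of-workspace zoom, consistent with the description above though not taken from GeoZui3D itself, is to scale the eye's offset from the workspace center so the center stays fixed on screen while the scene grows or shrinks around it:

```python
def zoom_about_center(eye, center, scale):
    """Center-of-workspace style zoom: scale the eye position's offset
    from the workspace center.  The center point stays visually fixed
    while the apparent scale of the scene changes around it."""
    return tuple(c + (e - c) * scale for e, c in zip(eye, center))

# Zooming in (scale < 1) halves the distance to the center of workspace:
assert zoom_about_center((10.0, 0.0, 8.0), (0.0, 0.0, 0.0), 0.5) == (5.0, 0.0, 4.0)
```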

    Electronic Chart of the Future: The Hampton Roads Project

    ECDIS is evolving from a two-dimensional static display of chart-related data to a decision support system capable of providing real-time or forecast information. While there may not be consensus on how this will occur, it is clear that ENC data and the shipboard display environment must incorporate both depth and time in an intuitively understandable way. Currently, we have the ability to conduct high-density hydrographic surveys capable of producing ENCs with decimeter contour intervals or depth areas, yet our existing systems and specifications do not provide for full utilization of this capability. Ideally, a mariner should be able to benefit from detailed hydrographic data, coupled with both forecast and real-time water levels, presented in a variety of perspectives. With this information, mariners will be able to plan and carry out transits with the benefit of precisely determined and easily perceived under-keel, overhead, and lateral clearances. This paper describes a Hampton Roads Demonstration Project to investigate the challenges and opportunities of developing the “Electronic Chart of the Future.” In particular, a three-phase demonstration project is being planned:
    1. Compile test datasets from existing and new hydrographic surveys using advanced data processing and compilation procedures developed at the University of New Hampshire’s Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC);
    2. Investigate innovative approaches being developed at the CCOM/JHC to produce an interactive time- and tide-aware navigation display, and evaluate such a display on commercial and/or government vessels;
    3. Integrate real-time/forecast water depth information and port information services transmitted via an AIS communications broadcast.
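    A time- and tide-aware display must evaluate the tide forecast at the moment the vessel is expected to reach each point, not at the present moment. A minimal sketch of that computation, with invented function and parameter names and a constant-tide stand-in for a real forecast model:

```python
def corridor_clearances(waypoints, speed, tide_at, draft):
    """Under-keel clearance along a planned corridor, evaluating the
    tide forecast at the expected time of reaching each waypoint
    (cumulative distance divided by speed).

    `waypoints` is a list of (distance_from_start, charted_depth);
    `tide_at` is any forecast function of time; units are consistent
    but illustrative (e.g. nautical miles, knots, metres, hours)."""
    return [charted + tide_at(dist / speed) - draft
            for dist, charted in waypoints]

# A constant-tide forecast as a stand-in for a real water-level model:
route = [(0.0, 12.0), (6.0, 9.0)]
clear = corridor_clearances(route, speed=6.0, tide_at=lambda t: 1.5, draft=8.0)
assert clear == [5.5, 2.5]
```

    Swapping in a real forecast for `tide_at` is the only change needed to make the same route evaluation time-aware, which is the point of phase 3's real-time/forecast water-level integration.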

    Zooming vs. Multiple Window Interfaces: Cognitive Costs of Visual Comparisons

    In order to investigate large information spaces effectively, it is often necessary to employ navigation mechanisms that allow users to view information at different scales. Some tasks require frequent movements and scale changes to search for details and compare them. We present a model that makes predictions about user performance on such comparison tasks with different interface options. A critical factor embodied in this model is the limited capacity of visual working memory, which allows the cost of visits made via fixating eye movements to be compared with the cost of visits that require user interaction with the mouse. This model is tested with an experiment comparing a zooming user interface with a multi-window interface for a multiscale pattern matching task. The results closely matched predictions in task performance times; however, error rates were much higher with zooming than with multiple windows. We hypothesized that subjects made more visits in the multi-window condition, and ran a second experiment using an eye tracker to record the pattern of fixations. This revealed that subjects made far more visits back and forth between pattern locations when able to use eye movements than they made with the zooming interface. The results suggest that only a single graphical object was held in visual working memory for comparisons mediated by eye movements, reducing errors by reducing the load on visual working memory. Finally, we propose a design heuristic: extra windows are needed when visual comparisons must be made involving patterns of greater complexity than can be held in visual working memory.
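    The shape of such a visit-cost model can be sketched as follows. This is a toy reconstruction of the idea only: each visit loads at most a working-memory capacity's worth of pattern elements, so more complex patterns force more round trips, and each trip is cheap for an eye movement but expensive for a mouse-driven zoom or window switch. All constants and names are illustrative, not the paper's fitted parameters.

```python
import math

def predicted_time_ms(n_objects, complexity, vwm_capacity, visit_cost_ms):
    """Toy comparison-cost model: comparing each object against a
    reference needs ceil(complexity / capacity) round trips, because
    visual working memory holds only `vwm_capacity` elements per visit.
    Each round trip is two visits at `visit_cost_ms` (low for a
    fixating eye movement, high for a zoom or window interaction)."""
    trips_per_pair = math.ceil(complexity / vwm_capacity)
    pairs = n_objects - 1
    return pairs * trips_per_pair * 2 * visit_cost_ms

# Three objects, patterns of 6 elements, capacity ~3, 250 ms per visit:
assert predicted_time_ms(3, complexity=6, vwm_capacity=3, visit_cost_ms=250) == 2000
```

    The design heuristic at the end of the abstract falls out of this structure: once `complexity` exceeds `vwm_capacity`, the trip count multiplies every visit cost, so interfaces with cheap revisits (multiple windows scanned by eye) dominate interfaces with expensive ones (zooming).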